My model of Benquo (in particular after a recent comment thread) is somewhat skeptical that it’s a good idea to treat conflict and action asymmetrically.
I strongly believe it’s wrong to apply a higher burden to criticism of calls to action (or arguments offered in that context), than to the calls to action themselves. The frame in which we’re lumping everything someone feels personally attacked by together as “conflict” basically gives everyone proposing something an unprincipled veto, letting them reclassify any criticism as “conflict” by framing the criticism as an attack on them or their allies.
I agree that people have a justified expectation that criticism actually is meant as an attack, but that just means we have to solve a hard problem. If we bounce off it instead, then this isn’t really a rationality site, it’s just a weird social club with shared rationality-related applause lights.
And I’m realizing I basically knew that “somewhat skeptical” was not an accurate way to describe your beliefs; I think the algorithm that led me to write it that way was running through some sort of modesty or conflict-mediation filter that I don’t endorse. (Mostly noting this for my own reference.)
What sort of solutions might work?

Duncan’s suggestion here seems like it has the right mood—treating discussion of things someone might feel attacked by as an important enough class to commit resources to, and including the point of view of the people who feel attacked. Third parties are needed in such cases. But putting all the work on a small fixed class of moderators seems like it imposes a high burden on a few people.
One thing I’ve had occasion to want a couple times is something like an epistemic court. I have within the past several months felt a strong need for shared institutions that allow me to sue or be sued for being knowably wrong. Unlike state courts, I don’t see any need for a body that can award damages, just one that can make judgments. Without this, if someone claims I have a blind spot, it’s very hard for me to know when to actually terminate my own attempt to find it, since “no, YOU have a blind spot!” is sometimes true, but very hard to be subjectively confident of.
In any case, I think my intuition that courts would be helpful has something important in common with Duncan’s intuition that more active moderation would be helpful. There’s something wrong with the sort of debate-club norms we have now. We’re focused more on making valid arguments than on finding the truth, which leaves us vulnerable to large classes of trolling.
I think there’s been an implicit procedural-liberal bias to much discussion of moderation, where it’s assumed that we can agree on rules in lieu of a shared perspective. But this doesn’t actually work for getting to the truth, because it’s vulnerable to both manufactured spurious grievances, and illegible attacks that evade the detection of legible rules, without any real mechanism for adjudicating when we want to classify conflicts as one or the other (or both, or some third thing).
A lot of why I’ve been skeptical of the idea of a generic forum over the last few years, is that it seems to me like people who are trying to figure something specific out—who have a perspective which in some concrete interested way wants to be made more correct—are going to have a huge advantage at filtering constructive from unconstructive comments, vs people who are trying to comply with the rules of good thinking. Cf. Something to Protect.
I think I agree with most of the basic concepts here, and disagreements are mostly of the form “given current resources, what goals are practical to set and achieve?”
I think more active moderation of the type Duncan describes would be good, as would an epistemic court, and the only argument I have against them is that they’re expensive. An epistemic court seems potentially more viable because it doesn’t necessarily need to be used all the time – it’s expensive, but if it’s only used on the most important cases it might be affordable.
The sorts of systems I think LW is exploring right now are ones that “solve problems with technology, rather than cognitive effort, when possible.” Competent people are busy and the world is big, so it makes more sense to do things like nudges that require minimal effort from moderators to maintain. (The parts of Duncan’s suggestions that we’ve come closest to implementing are things that make it easy for moderators to at least skim each new post and take a few quick actions.)
This does mean there are limits on what sort of place LessWrong can be.
A lot of why I’ve been skeptical of the idea of a generic forum over the last few years, is that it seems to me like people who are trying to figure something specific out—who have a perspective which in some concrete interested way wants to be made more correct—are going to have a huge advantage at filtering constructive from unconstructive comments, vs people who are trying to comply with the rules of good thinking
This does sound like a good description of the problem.
I agree that people have a justified expectation that criticism actually is meant as an attack, but that just means we have to solve a hard problem. If we bounce off it instead, then this isn’t really a rationality site, it’s just a weird social club with shared rationality-related applause lights.
I definitely think of solving this as part of my long-term goal. But a major disagreement of mine is with the claim that “if you can’t solve this, you’re just left with a weird social club.” (This was also a major disagreement of mine with Duncan.)
I think there are lots of things you can achieve that are massive improvements over the status quo, that don’t require solving this problem. There are probably around 20 major characteristics I wish each LW user had (such as “be able to think in probabilities” and “be able to generate hypotheses for confusing phenomena”), and most of them can be improved with “regular learning and practice”, and nudges, rather than overcoming weird adversarial anti-inductive dynamics.
LessWrong isn’t as good as many small, private, heavily filtered spaces, but a) its present form still seems like a significant improvement over most alternatives in the same reference class of public forums, and b) I think there’s a bunch of room for further improvement.
A major example the team is exploring is the Open Questions feature. An important aspect of it is that it sort of forces people to focus on the object level, and on actually figuring things out. It’s harder to have a demon thread when the frame is “help answer this question.” And meanwhile it can start to shift people’s default behavior from “sort of just hang out on the internet” to “actually do intellectual labor that solves a problem.”
There are probably around 20 major characteristics I wish each LW user had (such as “be able to think in probabilities” and “be able to generate hypotheses for confusing phenomena”), and most of them can be improved with “regular learning and practice”, and nudges, rather than overcoming weird adversarial anti-inductive dynamics.
Why would this matter at all for any purpose that might relate to the use of rivalrous goods in an environment where there’s no solution to adversarial epistemics? What’s your model for how that could work?
I’m not sure what you mean. I agree solving adversarial epistemics is quite important and among the top priorities for the rationality project. But why would that be necessary to get any value out of empiricism/scholarship/etc?
Capitalism is built out of adversarial epistemics, which often results in waste, but it has still generated tremendous value, as have science and academia. I wouldn’t consider the typical company or research department a “weird social club” just because they hadn’t solved that yet.
Does that comparison seem wrong? Do you in fact consider most businesses weird social clubs? I’m not sure what you’re trying to get at here.
One thing re: missing moods is that while I think there’s room for improvement on the “be able to make criticisms without them being attacks” front, I think solving this looks quite different from the way you (and Duncan of 1.5 years ago) were trying to solve it.
There are fundamental limitations of a public forum, and of sprawling, heated discussions in particular. I think it will always require costly demonstrations of good faith to make strong criticisms in public without being perceived as attacking. If you attempt to do this without those demonstrations, I think you are just laying down norms that enable and incentivize politicians, resulting in less clarity, not more.
But there are two options that both seem relatively straightforward to me:
1. Make criticisms, and employ a lot of costly signaling that you are arguing in good faith.
2. Have a norm wherein people discuss criticism in private, and then afterwards publish a public document that they both endorse. (This may in some cases require a counterfactual willingness to write critiques that are attacks.)
I generally prefer the latter once a conversation has begun to branch and get heated. Once a conversation has become multithreaded and involves serious disagreements, maintaining good faith becomes exponentially more expensive.
(I also think it’s just sort of okay for there to be a mutual understanding and clarity that some classes of feedback need to be treated as indistinguishable from attacks, which means they need to be somewhat socially punished to disincentivize coalition politics, but that doesn’t mean they don’t also get listened to.)
[meta note, replying to you because we don’t yet have a good process for notifications that don’t rely on replying to a person:
I don’t know that this went anywhere important enough to publish, but fwiw, since my model of you puts at least some value on things being public and I don’t personally object, if you wanted me to turn this from a draft into a public post that’d be fine.]